While data-driven predictive models are a strictly technological construct, they operate within a social context in which benign engineering choices can entail implicit, indirect and unexpected real-life consequences. Fairness of such systems -- pertaining both to individuals and groups -- is one relevant consideration in this space; it surfaces when data capture protected characteristics based on which people may be discriminated against. To date, this notion has predominantly been studied for a fixed predictive model, often under different classification thresholds, striving to identify and eradicate undesirable, and possibly unlawful, aspects of its operation. Here, we backtrack on this assumption to propose and explore a novel definition of fairness under which individuals can be harmed when one predictor is chosen ad hoc from a group of equally-well-performing models, i.e., in view of utility-based model multiplicity. Since a person may be classified differently across models that are otherwise considered equivalent, such an individual could argue for the predictor yielding the most favourable outcome, yet employing that model may adversely affect others. We introduce this scenario with a two-dimensional example based on linear classification; we then investigate its analytical properties in a broader context; and, finally, we present experimental results on data sets popular in fairness studies. Our findings suggest that such unfairness arises in real-life situations and may be difficult to mitigate by technical means alone, as doing so degrades certain metrics of predictive performance.
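The core phenomenon can be illustrated with a minimal sketch: the data set, decision boundaries and test point below are hypothetical (not taken from the paper), but they show two linear classifiers with identical training accuracy that nonetheless assign different labels to the same new individual.

```python
import numpy as np

# Hypothetical 2-D training set (illustrative only).
X = np.array([[0.0, 1.0], [1.0, 2.0], [2.0, 0.0], [3.0, 1.0]])
y = np.array([0, 0, 1, 1])

def predict(w, b, X):
    """Linear classifier: label 1 iff w.x + b > 0."""
    return (X @ w + b > 0).astype(int)

# Two hand-picked decision boundaries that both fit the data perfectly.
w1, b1 = np.array([1.0, 0.0]), -1.5   # vertical boundary at x0 = 1.5
w2, b2 = np.array([1.0, -0.5]), -1.0  # tilted boundary

acc1 = (predict(w1, b1, X) == y).mean()
acc2 = (predict(w2, b2, X) == y).mean()
assert acc1 == acc2 == 1.0  # equally-well performing: utility-based multiplicity

# A new individual is classified differently by the two "equivalent" models,
# so this person would prefer whichever model yields the favourable outcome.
x_new = np.array([[1.6, 1.6]])
print(predict(w1, b1, x_new)[0], predict(w2, b2, x_new)[0])  # prints "1 0"
```

Both models are indistinguishable by aggregate utility, so the choice between them is arbitrary from an engineering standpoint yet consequential for the individual at `x_new`.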